    Transport congestion events detection (TCED): towards decorrelating congestion detection from TCP

    TCP (Transmission Control Protocol) uses a loss-based algorithm to estimate whether the network is congested. The main difficulty for this algorithm is to distinguish spurious from real network congestion events. Previous studies have proposed to improve the reliability of this congestion estimation by modifying the internal TCP algorithm. In this paper, we propose an original congestion event detection algorithm implemented independently of the TCP source code. Specifically, we propose a modular architecture for implementing congestion event detection that copes with the increasing complexity of the TCP code, and we use it to understand why some spurious congestion events may go undetected in complex cases. We show that our proposal increases the reliability of the TCP NewReno congestion detection algorithm, which may help in designing detection criteria independent of the TCP code. We find that solutions based only on RTT (Round-Trip Time) estimation are not accurate enough to cover all existing cases. Furthermore, we evaluate our algorithm with and without network reordering, where other inaccuracies, not previously identified, occur.
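
    To make the RTT limitation concrete, here is a deliberately naive sketch of a purely RTT-based spurious-loss test, the kind of criterion the abstract finds insufficient on its own. All names and the threshold are illustrative; this is not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): a naive RTT-based test
# for classifying a retransmission as spurious vs. a real congestion event.
# The margin value is hypothetical.

def classify_retransmission(rtt_samples, rtt_at_loss, margin=1.5):
    """Flag a loss as 'spurious' if the RTT around the loss never rose
    noticeably above the smoothed baseline (i.e. no queue build-up)."""
    if not rtt_samples:
        return "unknown"
    srtt = sum(rtt_samples) / len(rtt_samples)   # crude smoothed RTT
    if rtt_at_loss < margin * srtt:
        return "spurious"      # no congestion signal visible in the delay
    return "congestion"        # delay inflation suggests queueing

print(classify_retransmission([0.1, 0.11, 0.1], rtt_at_loss=0.12))  # spurious
print(classify_retransmission([0.1, 0.11, 0.1], rtt_at_loss=0.35))  # congestion
```

    Cases such as network reordering, where no delay inflation accompanies out-of-order delivery, are exactly where such a single-signal heuristic breaks down.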

    Understanding the impact of TFRC feedbacks frequency over long delay links

    TFRC is a transport protocol specifically designed to carry multimedia streams. TFRC does not provide reliable, in-order data delivery; however, it is designed to be friendly with TCP flows and therefore implements a congestion control algorithm. This congestion control relies on a feedback mechanism that lets receivers report the experienced drop rate to senders. Several studies have attempted to adapt TFRC to a wide range of network conditions and topologies. Although the current TFRC RFC states that there is little gain from sending a large number of feedback messages per RTT, recent studies have shown that in long-delay contexts, such as satellite-based networks, the performance of TFRC can be greatly improved by increasing the feedback frequency. Nevertheless, it is currently unclear how and why this increase improves the performance of TFRC. Therefore, in this paper, we aim at understanding the impact that multiple feedbacks per RTT may have (i) on the key parameters of TFRC (RTT, drop rate, and sending rate) and (ii) on the network parameters (reactiveness and link utilization). We also provide a detailed description of the micro-mechanisms behind the improvement of TFRC behaviour when multiple feedbacks per RTT are delivered, and determine the contexts where such feedback frequencies should be applied.
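
    For background, TFRC derives the allowed sending rate from the receiver-reported loss event rate and the measured RTT using the TCP throughput equation of RFC 5348. The sketch below shows that equation and why the rate is so sensitive to the RTT estimate that more frequent feedback helps refine; the parameter values in the calls are illustrative.

```python
from math import sqrt

def tfrc_rate(s, R, p, b=1):
    """TCP throughput equation used by TFRC (RFC 5348): allowed sending
    rate X (bytes/s) from segment size s (bytes), round-trip time R (s),
    loss event rate p, and packets acknowledged per ACK b."""
    t_RTO = 4 * R  # RFC 5348 recommends t_RTO = 4 * R
    return s / (R * sqrt(2 * b * p / 3)
                + t_RTO * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))

# On a long-delay (e.g. satellite) path the computed rate is much lower and
# highly sensitive to the RTT and loss estimates fed back by the receiver.
print(tfrc_rate(s=1460, R=0.6, p=0.01))   # ~600 ms satellite-like RTT
print(tfrc_rate(s=1460, R=0.06, p=0.01))  # ~60 ms terrestrial RTT
```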

    Towards an incremental deployment of ERN protocols: a proposal for an E2E-ERN hybrid protocol

    We propose an architecture based on a hybrid E2E-ERN approach to allow incremental deployment of ERN (Explicit Rate Notification) protocols in heterogeneous networks. The proposed IP-ERN architecture combines E2E (End-to-End) and ERN protocols and uses the minimum of the two congestion windows. Without introducing complex operations, the resulting E2E-ERN protocol provides inter- and intra-protocol fairness and benefits from all the advantages of ERN protocols when possible. We detail the principle of this novel IP-ERN architecture and show that it is highly adaptive to network dynamics and is compliant with IPv4, IPv6, and IP-in-IP tunneling solutions.
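
    The window rule itself is simple; below is a minimal sketch of the "minimum of the two congestion windows" behaviour described above, with illustrative names (the actual IP-ERN architecture involves considerably more machinery).

```python
# Minimal sketch of the hybrid window rule: the sender is bounded by the
# minimum of the E2E window (e.g. TCP's AIMD cwnd) and the window derived
# from ERN router feedback. Names are illustrative, not the paper's API.

def effective_window(cwnd_e2e, cwnd_ern=None):
    """Return the congestion window the sender actually uses."""
    if cwnd_ern is None:            # no ERN router on the path: pure E2E mode
        return cwnd_e2e
    return min(cwnd_e2e, cwnd_ern)  # never exceed either control loop

print(effective_window(40.0, 25.0))  # ERN feedback is the bottleneck -> 25.0
print(effective_window(40.0))        # legacy path -> fall back to E2E, 40.0
```

    Taking the minimum is what makes incremental deployment safe: on paths with no ERN-capable router the sender degrades gracefully to plain E2E behaviour.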

    SatERN: a PEP-less solution for satellite communications

    In networks with very large delays, such as satellite IP-based networks, standard TCP is unable to properly use the available resources. To overcome this problem, Performance Enhancing Proxies (PEPs), which break the end-to-end connection and simulate a receiver close enough to the sender, can be placed before the links with large delay. Although splitting PEPs do not modify the transport protocol at the end nodes, they prevent the use of security protocols such as IPsec. In this paper, we propose a solution, named SatERN, to replace PEPs. This proposal, based on Explicit Rate Notification (ERN) protocols over IP, does not split connections and is compliant with IP-in-IP tunneling solutions. Finally, we show that the SatERN solution achieves high satellite link utilization and fairness for the satellite traffic.

    XCP-i: "eXplicit Control Protocol" for the interconnection of heterogeneous high-speed networks

    eXplicit Control Protocol (XCP) is a transport protocol that precisely controls the evolution of the sender's congestion window, thereby avoiding the slow-start and congestion-avoidance phases. However, XCP requires the collaboration of every router on the path from source to receiver, which is practically impossible to achieve in reality; otherwise, its performance can be far worse than TCP's. This strong dependence of XCP on specialized routers considerably limits the benefit of deploying XCP routers on only a portion of the network. In this paper, we propose an extension of XCP, called XCP-i, which interconnects XCP clouds with non-XCP clouds without losing the benefit of XCP's precise control over the congestion window. Simulation results on topologies typical of incremental deployment scenarios show that XCP-i performs far better than TCP on high-speed links.
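
    The abstract does not detail XCP-i's internals, but for context, the per-router efficiency controller of standard XCP (Katabi et al.), whose precise feedback XCP-i aims to preserve across non-XCP clouds, can be sketched as follows; the unit conversions and variable names are ours.

```python
# Sketch of XCP's per-router efficiency controller: the aggregate feedback
# an XCP router computes each control interval. ALPHA and BETA are the
# stability constants from the original XCP paper.

ALPHA, BETA = 0.4, 0.226

def aggregate_feedback(capacity_bps, input_rate_bps, avg_rtt_s, queue_bytes):
    """Total rate change (bytes per control interval) requested of senders."""
    spare = (capacity_bps - input_rate_bps) / 8   # spare bandwidth, bytes/s
    return ALPHA * avg_rtt_s * spare - BETA * queue_bytes

# Underused link: positive feedback tells senders to speed up.
print(aggregate_feedback(100e6, 60e6, 0.08, queue_bytes=0))
# Saturated link with a standing queue: negative feedback drains it.
print(aggregate_feedback(100e6, 100e6, 0.08, queue_bytes=50_000))
```

    The difficulty XCP-i addresses is that this computation needs the link capacity and input rate, which are unknown inside a non-XCP cloud and must therefore be estimated at its boundary.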

    Collaborative Traffic Measurement in Virtualized Data Center Networks

    Data center network monitoring can be carried out at hardware networking equipment (e.g. physical routers) and/or software networking equipment (e.g. virtual switches). While software switches offer high flexibility to deploy various monitoring tools, they have to use server resources, especially CPU and memory, which can then no longer be fully reserved to serve users' traffic. In this paper, we closely examine the costs of (i) sampling packets; (ii) sending them to a user-space program for measurement; and (iii) forwarding them to a remote server where they will be processed in case of a lack of local resources. Starting from empirical observations, we derive an analytical model that accurately predicts (R² = 99.5%) the three aforementioned costs as a function of the sampling rate. We next introduce a collaborative approach for traffic monitoring and sampling that maximizes the amount of collected traffic without impacting the data center's operation. We analyze, through numerical simulations, the performance of our collaborative solution. The results show that it is able to take advantage of the uneven loads on the servers to maximize the amount of traffic that can be sampled at the scale of a data center. The resulting gain reaches 200% compared to a non-collaborative approach.
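
    As a toy illustration of the budgeting idea, the sketch below models per-server sampling cost as a function of the sampling rate and caps the rate by the CPU headroom left over by user traffic. The linear cost form and all coefficients are hypothetical, not the model fitted in the paper.

```python
# Hypothetical sketch: set each server's sampling rate so that measurement
# only consumes the CPU headroom left by user traffic. Coefficients are
# illustrative placeholders.

def monitoring_cpu_cost(sampling_rate, pps, cost_per_pkt=2e-6):
    """CPU fraction spent sampling `sampling_rate` of `pps` packets/s."""
    return sampling_rate * pps * cost_per_pkt

def max_sampling_rate(cpu_headroom, pps, cost_per_pkt=2e-6):
    """Largest sampling rate that fits in the server's spare CPU."""
    return min(1.0, cpu_headroom / (pps * cost_per_pkt))

# A lightly loaded server can sample everything; a busy one cannot, and a
# collaborative scheme can forward its excess packets to servers with headroom.
print(max_sampling_rate(cpu_headroom=0.5, pps=100_000))   # -> 1.0
print(max_sampling_rate(cpu_headroom=0.05, pps=500_000))  # -> 0.05
```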

    On the Cost of Measuring Traffic in a Virtualized Environment

    The current trend in application development and deployment is to package applications and services within containers or virtual machines. This results in a blend of virtual and physical resources, with complex network interconnection schemas mixing virtual and physical switches, along with specific protocols to build virtual networks spanning several servers. While the complexity of this setup is hidden by private/public cloud management solutions, e.g. OpenStack, this new environment constitutes a challenge when it comes to monitoring and debugging performance-related issues. In this paper, we introduce the problem of measuring traffic in a virtualized environment and focus on one typical scenario, namely virtual machines interconnected with a virtual switch. For this scenario, we assess the cost of continuously measuring the network traffic activity of the machines. Specifically, we seek to estimate the competition for the physical resources (e.g., CPU) of the physical server between the measurement task and the legacy application activity.

    Bringing Energy Aware Routing closer to Reality with SDN Hybrid Networks

    Energy aware routing aims at reducing the energy consumption of ISP networks. The idea is to adapt routing to the traffic load in order to turn off some hardware. However, it implies making dynamic changes to routing configurations, which is almost impossible with legacy protocols. The Software Defined Network (SDN) paradigm bears the promise of allowing a dynamic optimization with its centralized controller. In this work, we propose SENAtoR, an algorithm to enable energy aware routing in a scenario of progressive migration from legacy to SDN hardware. Since, in real life, turning off network equipment is a delicate task that can lead to packet losses, SENAtoR also provides several features to safely enable energy saving services: tunneling for fast rerouting, smooth node disabling, and detection of both traffic spikes and link failures. We validate our solution by extensive simulations and by experimentation. We show that SENAtoR can be progressively deployed in a network using the SDN paradigm. It reduces the energy consumption of ISP networks by 5 to 35%, depending on the penetration of SDN hardware, while, strikingly, also diminishing the packet loss rate compared to legacy protocols.
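
    As a rough illustration of the underlying idea, not of SENAtoR itself, a greedy pass that switches off the least-loaded links while keeping the topology connected might look like the sketch below; real energy aware routing must also respect link capacities and handle the failure and spike cases the abstract mentions.

```python
# Illustrative sketch (not SENAtoR): greedily sleep the least-loaded links
# as long as the network stays connected, so remaining hardware carries the
# rerouted traffic. Capacity checks are deliberately omitted for brevity.

from collections import defaultdict

def connected(nodes, edges):
    """DFS connectivity check on an undirected graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return seen == set(nodes)

def links_to_sleep(nodes, link_load):
    """Greedily pick links to turn off, least-loaded first."""
    active, off = set(link_load), []
    for link in sorted(link_load, key=link_load.get):
        if connected(nodes, active - {link}):
            active.remove(link)
            off.append(link)
    return off

loads = {("a", "b"): 0.9, ("b", "c"): 0.1, ("a", "c"): 0.2}
print(links_to_sleep({"a", "b", "c"}, loads))  # sleeps the redundant link
```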

    MINNIE: an SDN World with Few Compressed Forwarding Rules

    Software Defined Networking (SDN) is gaining momentum with the support of major manufacturers. While it brings flexibility to the management of flows within the data center fabric, this flexibility comes at the cost of smaller routing table capacities. Indeed, the Ternary Content Addressable Memory (TCAM) needed by SDN devices has smaller capacity than the CAMs used in legacy hardware. In this paper, we investigate compression techniques to maximize the utility of SDN switches' forwarding tables. We validate our algorithm, called MINNIE, with intensive simulations on well-known data center topologies, to study its efficiency and compression ratio for a large number of forwarding rules. Our results indicate that MINNIE scales well, being able to deal with around a million different flows with fewer than 1,000 forwarding entries per SDN switch, while requiring negligible computation time. To assess the operational viability of MINNIE in real networks, we deployed a testbed able to emulate a k=4 fat-tree data center topology. We demonstrate, on the one hand, that even with a small number of clients, the limit in terms of number of rules is reached if no compression is performed, increasing the delay of new incoming flows. MINNIE, on the other hand, drastically reduces the number of rules that need to be stored, with no packet losses nor detectable extra delays if routing lookups are done in ASICs. Hence, both simulations and experimental results suggest that MINNIE can be safely deployed in real networks, providing compression ratios between 70% and 99%.
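
    To illustrate one of the ideas behind such compression, here is a toy sketch of the default-rule strategy: replace all rules sharing the table's most common action with a single lowest-priority wildcard. MINNIE also aggregates per source and per destination; this shows only the simplest of the strategies, with hypothetical rule names.

```python
# Toy sketch of default-rule compression for a forwarding table.
# rules: dict mapping an exact (src, dst) match to a forwarding action.

from collections import Counter

def compress_with_default(rules):
    """Keep only rules whose action differs from the most common one;
    that most common action becomes the wildcard default."""
    default_action, _ = Counter(rules.values()).most_common(1)[0]
    kept = {match: act for match, act in rules.items()
            if act != default_action}
    return kept, default_action  # a low-priority '*' rule carries the default

table = {("h1", "h3"): "port1", ("h2", "h3"): "port1",
         ("h1", "h4"): "port1", ("h2", "h4"): "port2"}
kept, default = compress_with_default(table)
print(len(table), "->", len(kept) + 1, "rules; default action:", default)
# 4 -> 2 rules; default action: port1
```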

    Too many SDN rules? Compress them with MINNIE

    Software Defined Networking (SDN) is gaining momentum with the support of major manufacturers. While it brings flexibility to the management of flows within the data center fabric, this flexibility comes at the cost of smaller routing table capacities. In this paper, we investigate compression techniques to reduce the forwarding information base (FIB) of SDN switches. We validate our algorithm, called MINNIE, on a real testbed able to emulate a 20-switch fat-tree architecture. We demonstrate that even with a small number of clients, the limit in terms of number of rules is reached if no compression is performed, increasing the delay of all new incoming flows. MINNIE, on the other hand, drastically reduces the number of rules that need to be stored, with a limited impact on the packet loss rate. We also evaluate the actual switching and reconfiguration times and the delay introduced by the communications with the controller.